
homework 6, version 0


Submission by: Jazzy Doe (jazz@mit.edu)


Homework 6: Epidemic modeling III

18.S191, fall 2020

This notebook contains built-in, live answer checks! In some exercises you will see a coloured box, which runs a test case on your code, and provides feedback based on the result. Simply edit the code, run it, and the check runs again.

For MIT students: there will also be some additional (secret) test cases that will be run as part of the grading process, and we will look at your notebook and write comments.

Feel free to ask questions!

# edit the code below to set your name and kerberos ID (i.e. email without @mit.edu)

student = (name = "Jazzy Doe", kerberos_id = "jazz")

# you might need to wait until all other cells in this notebook have completed running.
# scroll around the page to see what's up

Let's create a package environment:

begin
    using Pkg
    Pkg.activate(mktempdir())
    Pkg.add([
        Pkg.PackageSpec(name="PlutoUI", version="0.6.7-0.6"),
        Pkg.PackageSpec(name="Plots", version="1.6-1"),
    ])

    using Plots
    gr()
    using PlutoUI
end

We began this module with data on the COVID-19 epidemic, but then looked at mathematical models. How can we make the connection between data and models?

Models have parameters, such as the rate of recovery from infection. Where do the parameter values come from? Ideally we would like to extract them from data. The goal of this homework is to do this by fitting a model to data.

For simplicity we will use data that was generated from the spatial model in Homework 4, rather than real-world data, and we will fit the simplest SIR model. But the same ideas apply more generally.

There are many ways to fit a function to data, but all involve some form of optimization, usually minimization of a particular function, the loss function; this is the basis of the vast field of machine learning.

The loss function is a function of the model parameters; it measures how far the model output is from the data, for the given values of the parameters.

We emphasise that this material is pedagogical; there is no suggestion that these specific techniques should be used for actual calculations; rather, it is the underlying ideas that are important.


Exercise 1: Calculus without calculus


Before we jump into simulating the SIR equations, let's experiment with a simple 1D function. In calculus, we learn techniques for differentiating and integrating symbolic equations, e.g. $\frac{d}{dx} x^n = n x^{n-1}$. But in real applications, it is often impossible to apply these techniques, either because the problem is too complicated to solve symbolically, or because our problem has no symbolic expression, like when working with experimental results.

Instead, we use ✨ computers ✨ to approximate derivatives and integrals. Instead of applying rules to symbolic expressions, we use much simpler strategies that only use the output values of our function.

As a first example, we will approximate the derivative of a function. Our method is inspired by the analytical definition of the derivative!

$$f'(a) := \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}.$$

The finite difference method simply fixes a small value for $h$, say $h = 10^{-3}$, and then approximates the derivative as:

$$f'(a) \approx \frac{f(a+h) - f(a)}{h}.$$
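As a quick sanity check of this formula (a minimal sketch, not part of the exercise): for $f(x) = x^2$ we know $f'(3) = 6$, and the finite difference quotient with a small $h$ should land close to that value.

# sanity check: finite difference of f(x) = x^2 at a = 3, exact slope is 6
let
    f = x -> x^2
    a, h = 3.0, 1e-3
    (f(a + h) - f(a)) / h  # ≈ 6.001
end

For smooth functions, the error of this one-sided formula shrinks roughly linearly with $h$.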

Exercise 1.1 - tangent line

👉 Write a function finite_difference_slope that takes a function f and numbers a and h. It returns the slope $f'(a)$, approximated using the finite difference formula above.

function finite_difference_slope(f::Function, a, h=1e-3)
    (f(a+h) - f(a)) / h
end
finite_difference_slope(sqrt, 4.0, 5.0)  # returns 0.2

Got it!

Let's move on to the next section.


👉 Write a function tangent_line that takes the same arguments f, a and h, but it returns a function. This function ($\mathbb{R} \to \mathbb{R}$) is the tangent line with slope $f'(a)$ (computed using finite_difference_slope) that passes through $(a, f(a))$.


Hint

Remember that functions are objects! For example, here is a function that returns the square root function:

function the_square_root_function()
	f = x -> sqrt(x)
	return f
end
function tangent_line(f, a, h)
    slope = finite_difference_slope(f, a, h)
    value = f(a)
    x -> (x - a)*slope + value
end

Got it!

Great! 🎉

let
    p = plot(zeroten, wavy, label="f(x)")
    scatter!(p, [a_finite_diff], [wavy(a_finite_diff)], label="a", color="red")
    vline!(p, [a_finite_diff], label=nothing, color="red", linestyle=:dash)
    scatter!(p, [a_finite_diff+h_finite_diff], [wavy(a_finite_diff+h_finite_diff)], label="a + h", color="green")
    try
        result = tangent_line(wavy, a_finite_diff, h_finite_diff)
        plot!(p, zeroten, result, label="tangent", color="purple")
    catch
    end
    plot!(p, xlim=(0,10), ylim=(-2, 8))
end |> as_svg
# this is our test function
wavy(x) = .1x^3 - 1.6x^2 + 7x - 3;

The slider below controls $h$ using a log scale. In the (mathematical) definition of the derivative, we take $\lim_{h \to 0}$. This corresponds to moving the slider to the left.

Notice that, as you decrease h, the tangent line gets more accurate, but what happens if you make h too small?


Exercise 1.2 - antiderivative

In the finite differences method, we approximated the derivative of a function:

$$f'(a) \approx \frac{f(a+h) - f(a)}{h}$$

We can do something very similar to approximate the antiderivative of a function. Finding the antiderivative means that we use the slope $f'$ to compute $f$ numerically!

This antiderivative problem is illustrated below. The only information that we have is the slope at any point $a \in \mathbb{R}$, and we have one initial value, $f(1)$.

Using only this information, we want to reconstruct $f$.

By rearranging the equation above, we get the Euler method:

$$f(a+h) \approx h f'(a) + f(a)$$

Using this formula, we only need to know the value $f(a)$ and the slope $f'(a)$ of a function at $a$ to get the value at $a+h$. Doing this repeatedly can give us the value at $a+2h$, at $a+3h$, etc., all from one initial value $f(a)$.

👉 Write a function euler_integrate_step that applies this formula to a known derivative function $f'$ at $a$, with step size $h$ and the initial value $f(a)$. It returns the next value, $f(a+h)$.
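As a tiny worked example of one Euler step (our own numbers, not part of the exercise): if $f(a) = 1$, $f'(a) = 2$ and $h = 0.1$, the formula predicts $f(a+h) \approx 1 + 0.1 \cdot 2 = 1.2$.

# one Euler step by hand: f(a) = 1, f'(a) = 2, h = 0.1
let
    fa, slope, h = 1.0, 2.0, 0.1
    fa + h * slope  # 1.2
end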
# in this exercise, only the derivative is given
wavy_deriv(x) = .3x^2 - 3.2x + 7;
@bind a_euler Slider(zeroten, default=1)
function euler_integrate_step(fprime::Function, fa::Number,
        a::Number, h::Number)
    # note: this version evaluates the slope at the right endpoint, a + h
    fa + h*fprime(a + h)
end

Got it!

Good job!


👉 Write a function euler_integrate that takes a known derivative function $f'$, the initial value $f(a)$ and a range T with a == first(T) and h == step(T). It applies the function euler_integrate_step repeatedly, once per entry in T, to produce the sequence of values $f(a+h)$, $f(a+2h)$, etc.

function euler_integrate(fprime::Function, fa::Number,
        T::AbstractRange)
    a0 = T[1]
    h = step(T)
    accumulate(T, init=fa) do prev, a
        euler_integrate_step(fprime, prev, a, h)
    end
end

Let's try it out on $f'(x) = 3x^2$ and $T$ ranging from $0$ to $10$.

We already know the analytical solution $f(x) = x^3$, so the result should be an array going from (approximately) 0.0 to 1000.0.
euler_test = let
    fprime(x) = 3x^2
    T = 0 : 0.1 : 10
    euler_integrate(fprime, 0, T)
end

Got it!

Keep it up!


You see that our numerical antiderivative is not very accurate, but we can get a smaller error by choosing a smaller step size. Try it out!

There are also alternative integration methods that are more accurate with the same step size. Some methods also use the second derivative, other methods use multiple steps at once, etc.! This is the study of Numerical Methods; one such method is sketched below.

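For instance, the midpoint method (a simple second-order Runge–Kutta scheme) evaluates the slope halfway through the step rather than at an endpoint. The sketch below is illustrative only; the name midpoint_integrate_step is ours, not part of the exercise:

# sketch: one midpoint (RK2) step — the slope is evaluated at the middle of the
# interval, which makes the global error shrink like h² instead of Euler's h
function midpoint_integrate_step(fprime::Function, fa::Number,
        a::Number, h::Number)
    fa + h * fprime(a + h/2)
end

You could swap this in for euler_integrate_step inside euler_integrate and compare both results against the exact antiderivative.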

Exercise 2: Simulating the SIR differential equations

Recall from the lectures that the ordinary differential equations (ODEs) for the SIR model are as follows:

$$\begin{aligned}
\dot{s} &= -\beta s i \\
\dot{i} &= +\beta s i - \gamma i \\
\dot{r} &= +\gamma i
\end{aligned}$$

where $\dot{s} := \frac{ds}{dt}$ is the derivative of $s$ with respect to time. Recall that $s$ denotes the proportion (fraction) of the population that is susceptible, a number between 0 and 1.

We will use the simplest possible method to simulate these, namely the Euler method. The Euler method is not always a good method to solve ODEs accurately, but for our purposes it is good enough.

In the previous exercise, we introduced the Euler method for a 1D function, which you can see as an ODE that only depends on time. For the SIR equations, we have an ODE that only depends on the previous value, not on time, and we have 3 equations instead of 1.

The solution is quite simple: we apply the Euler method to each of the differential equations within a single time step to get new values for each of $s$, $i$ and $r$ at the end of the time step, in terms of the values at the start of the time step. The Euler-discretised equations are:

$$\begin{aligned}
s(t+h) &= s(t) - h\,\beta s(t) i(t) \\
i(t+h) &= i(t) + h\,(\beta s(t) i(t) - \gamma i(t)) \\
r(t+h) &= r(t) + h\,\gamma i(t)
\end{aligned}$$

👉 Implement a function euler_SIR_step(β, γ, sir_0, h) that performs a single Euler step for these equations with the given parameter values and initial values, with a step size h.

sir_0 is a 3-element vector, and you should return a new 3-element vector with the values after the timestep.

function euler_SIR_step(β, γ, sir_0::Vector, h::Number)
    s, i, r = sir_0
    return [
        s - h * β*s*i,
        i + h * (β*s*i - γ*i),
        r + h * γ*i,
    ]
end
euler_SIR_step(0.1, 0.05,
    [0.99, 0.01, 0.00],
    0.1)

👉 Implement a function euler_SIR(β, γ, sir_0, T) that applies the previously defined function over a time range T.

You should return a vector of vectors: a 3-element vector for each point in time.

function euler_SIR(β, γ, sir_0::Vector, T::AbstractRange)
    # T is a range; you get the step size like so:
    h = step(T)
    return accumulate(T; init=sir_0) do sir, _
        euler_SIR_step(β, γ, sir, h)
    end
end
sir_T = 0 : 0.1 : 60.0
sir_results = euler_SIR(0.3, 0.15,
    [0.99, 0.01, 0.00],
    sir_T)

Let's plot s, i and r as a function of time.


👉 Do you see an epidemic outbreak (i.e. a rapid growth in number of infected individuals, followed by a decline)? What happens after a long time? Does everybody get infected?


default_SIR_parameters_observation = md"""
blabla
"""

👉 Make an interactive visualization in which you vary β and γ. What relation should β and γ have for an epidemic outbreak to occur? (One way to set this up is sketched below.)

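A minimal sketch of one possible setup — the slider names interactive_β and interactive_γ are our own, and in Pluto each @bind must live in its own cell:

# each of these @bind lines goes in a separate Pluto cell
@bind interactive_β Slider(0.01:0.01:1.0, default=0.3)

@bind interactive_γ Slider(0.01:0.01:1.0, default=0.15)

# in another cell: simulate with the bound values and plot the three curves
let
    results = euler_SIR(interactive_β, interactive_γ, [0.99, 0.01, 0.00], sir_T)
    plot(sir_T, [sir[1] for sir in results], label="s")
    plot!(sir_T, [sir[2] for sir in results], label="i")
    plot!(sir_T, [sir[3] for sir in results], label="r")
end

For reference, SIR theory says an outbreak occurs roughly when $\beta s_0 / \gamma > 1$, i.e. (with $s_0 \approx 1$) when $\beta > \gamma$.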

Exercise 3: Numerical gradient

For fitting we need optimization, and for optimization we will use derivatives (rates of change). In Exercise 1, we wrote a function finite_difference_slope(f, a) to approximate $f'(a)$. In this exercise we will write a function to compute partial derivatives.


Exercise 3.1

👉 Write functions ∂x(f, a, b) and ∂y(f, a, b) that calculate the partial derivatives $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ at $(a, b)$ of a function $f: \mathbb{R}^2 \to \mathbb{R}$ (i.e. a function that takes two real numbers and returns one real).

Recall that $\frac{\partial f}{\partial x}$ is the derivative of the single-variable function $g(x) := f(x, b)$ obtained by fixing the value of $y$ to $b$.

You should use anonymous functions for this. These have the form x -> x^2, meaning "the function that sends $x$ to $x^2$".

function ∂x(f::Function, a, b)
    directional = h -> f(a + h, b)
    finite_difference_slope(directional, 0)
end
∂x(
    (x, y) -> 7x^2 + y,
    3, 7
)  # ≈ 42.007 — the exact answer is 42

Got it!

You got the right answer!

function ∂y(f::Function, a, b)
    directional = h -> f(a, b + h)
    finite_difference_slope(directional, 0)
end
∂y(
    (x, y) -> 7x^2 + y,
    3, 7
)  # ≈ 1.0

Got it!

Well done!


Exercise 3.2

👉 Write a function gradient(f, a, b) that calculates the gradient of a function $f$ at the point $(a, b)$, given by the vector $\nabla f(a, b) := \left( \frac{\partial f}{\partial x}(a, b), \frac{\partial f}{\partial y}(a, b) \right)$.

function gradient(f::Function, a, b)
    [∂x(f, a, b), ∂y(f, a, b)]
end
gradient(
    (x, y) -> 7x^2 + y,
    3, 7
)  # ≈ [42.007, 1.0]

Got it!

Well done!


Exercise 4: Minimisation using gradient descent

In this exercise we will use gradient descent to find local minima of (smooth enough) functions.

To do so we will think of a function as a hill. To find a minimum we should "roll down the hill".

Exercise 4.1

We want to minimize a 1D function, i.e. a function $f: \mathbb{R} \to \mathbb{R}$. To do so we notice that the derivative tells us the direction in which the function increases. Positive slope means that the minimum is to the left, negative slope means it is to the right. So our gradient descent method is to take steps in the opposite direction, of a small size $\eta \cdot f'(x_0)$.

👉 Write a function gradient_descent_1d_step(f, x0) that performs a single gradient descent step, from the point x0 and using your function finite_difference_slope to approximate the derivative. The result should be the next guess for $x$.

function gradient_descent_1d_step(f, x0; η=0.01)
    slope = finite_difference_slope(f, x0)
    x0 - η*slope
end
let
    f = x -> x^2
    # the minimum is at 0, so we should take a small step to the left
    gradient_descent_1d_step(f, 5)  # ≈ 4.89999
end

Got it!

Let's move on to the next section.


x0 = -1

👉 Write a function gradient_descent_1d(f, x0) that repeatedly applies the previous function (N_steps times), starting from the point x0, like in the visualisation above. The result should be the final guess for $x$.

function gradient_descent_1d(f, x0; η=0.01, N_steps=1000)
    reduce(1:N_steps; init=x0) do old, _
        gradient_descent_1d_step(f, old; η=η)
    end
end
let
    f = x -> (x - 5)^2 - 3
    # minimum should be at x = 5
    gradient_descent_1d(f, 0.0)  # ≈ 4.9995
end

Got it!

Awesome!


Right now we take a fixed number of steps, even if the minimum is found quickly. What would be a better way to decide when to end the function? (One common idea is sketched below.)


better_stopping_idea = md"""
blabla
"""
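One common answer, as a sketch under our own naming (gradient_descent_1d_tol and its tolerance are not part of the graded exercise): stop when successive iterates barely move.

# stop when the update becomes smaller than a tolerance, instead of a fixed step count
function gradient_descent_1d_tol(f, x0; η=0.01, atol=1e-8, max_steps=100_000)
    x = x0
    for _ in 1:max_steps
        x_new = gradient_descent_1d_step(f, x; η=η)
        abs(x_new - x) < atol && return x_new  # converged: the step was tiny
        x = x_new
    end
    x  # give up after max_steps
end

Other options include monitoring the slope itself (stop when |f′(x)| is small) or the change in f(x) between iterations.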

Exercise 4.2

Multivariable calculus tells us that the gradient $\nabla f(a, b)$ at a point $(a, b)$ is the direction in which the function increases the fastest. So again we should take a small step in the opposite direction. Note that the gradient is a vector which tells us which direction to move in the plane $(a, b)$. We multiply this vector with the scalar $\eta$ to control the step size.

👉 Write functions gradient_descent_2d_step(f, x0, y0) and gradient_descent_2d(f, x0, y0) that do the same for functions $f(x, y)$ of two variables.

function gradient_descent_2d_step(f, x0, y0; η=0.01)
    antidirection = gradient(f, x0, y0)
    [x0, y0] .- η * antidirection
end
function gradient_descent_2d(f, x0, y0; η=0.01, N_steps=1000)
    reduce(1:N_steps; init=(x0, y0)) do old, _
        gradient_descent_2d_step(f, old...; η=η)
    end
end
gradient_descent_2d(himmelbau, 0, 0)
@bind N_gradient_2d Slider(0:20)

x0 = 0

y0 = 0
himmelbau(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2

We also prepared a 3D visualisation if you like! It's a bit slow...

run_3d_visualisation = false

👉 Can you find different minima?


Exercise 5: Learning parameter values

In this exercise we will apply gradient descent to fit a simple function $y = f_{\alpha,\beta}(x)$ to some data given as pairs $(x_i, y_i)$. Here $\alpha$ and $\beta$ are parameters that appear in the form of the function $f$. We want to find the parameters that provide the best fit, i.e. the version $f_{\alpha,\beta}$ of the function that is closest to the data when we vary $\alpha$ and $\beta$.

To do so we need to define what "best" means. We will define a measure of the distance between the function and the data, given by a loss function, which itself depends on the values of α and β. Then we will minimize the loss function over α and β to find those values that minimize this distance, and hence are "best" in this precise sense.

The iterative procedure by which we gradually adjust the parameter values to improve the loss function is often called machine learning or just learning, since the computer is "discovering" information in a gradual way, which is supposed to remind us of how humans learn. [Hint: This is not how humans learn.]

Exercise 5.1 - 🎲 frequencies

We generate a small dataset by throwing 10 dice, and counting the sum. We repeat this experiment many times, giving us a frequency distribution in a familiar shape.

import Statistics
dice_x, dice_y = dice_frequencies(10, 20_000)

Let's try to fit a gaussian (normal) distribution. Its PDF with mean $\mu$ and standard deviation $\sigma$ is

$$f_{\mu,\sigma}(x) := \frac{1}{\sigma \sqrt{2\pi}} \exp\left[ -\frac{(x - \mu)^2}{2\sigma^2} \right]$$

👉 (Not graded) Manually fit a Gaussian distribution to our data by adjusting $\mu$ and $\sigma$ until you find a good fit. (A sketch of the slider setup follows below.)

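The notebook defines the guess_μ and guess_σ sliders in hidden cells; a sketch of how they could look (the ranges are our guess, and each @bind goes in its own Pluto cell):

# slider for the mean of the manual fit
@bind guess_μ Slider(1:0.1:60, default=24.0)

# slider for the standard deviation of the manual fit
@bind guess_σ Slider(0.1:0.1:20, default=12)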

μ = 24.0


σ = 12


Show manual fit:

function gauss(x, μ, σ)
    (1 / (sqrt(2π) * σ)) * exp(-(x-μ)^2 / σ^2 / 2)
end

What we just did was adjust the function parameters until we found the best possible fit. Let's automate this process! To do so, we need to quantify how good or bad a fit is.

👉 Define a loss function to measure the "distance" between the actual data and the function. It will depend on the values of $\mu$ and $\sigma$ that you choose:

$$\mathcal{L}(\mu, \sigma) := \sum_i \left[ f_{\mu,\sigma}(x_i) - y_i \right]^2$$

function loss_dice(μ, σ)
    fit = gauss.(dice_x, [μ], [σ])
    sum((fit - dice_y) .^ 2)
end
loss_dice(guess_μ + 3, guess_σ) >
    loss_dice(guess_μ, guess_σ)  # false for the current slider values

👉 Use your gradient_descent_2d function to find a local minimum of $\mathcal{L}$, starting with initial values $\mu = 30$ and $\sigma = 1$. Call the found parameters found_μ and found_σ.

found_μ, found_σ = let
    gradient_descent_2d(loss_dice, 30, 1, η=10)
end

Go back to the graph to see your optimized gaussian curve!

If your fit is close, then probability theory tells us that the found parameter $\mu$ should be close to the weighted mean of our data, and $\sigma$ should approximate the sample standard deviation. We have already computed these values (a sketch of that computation is shown below), and we check how close they are:

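The notebook computes stats_μ and stats_σ in hidden cells; a sketch of how such a computation could look, assuming dice_y holds normalised frequencies (summing to 1):

# weighted mean and standard deviation of the dice sums, with dice_y as weights
let
    μ = sum(dice_x .* dice_y)
    σ = sqrt(sum(dice_y .* (dice_x .- μ) .^ 2))
    (μ, σ)
end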
abs(stats_μ - found_μ)  # ≈ 0.0315

abs(stats_σ - found_σ)  # ≈ 0.0698

Got it!

Great!

stats_μ  # 35.00915

stats_σ  # 5.410671518166677

Exercise 6: Putting it all together — fitting an SIR model to data

In this exercise we will fit the (non-spatial) SIR ODE model from Exercise 2 to some data generated from the spatial model in Problem Set 4. If we are able to find a good fit, that would suggest that the spatial aspect "does not matter" too much for the dynamics of these models. If the fit is not so good, perhaps there is an important effect of space. (As usual in statistics, and indeed in modelling in general, we should be very cautious of making claims of this nature.)

This fitting procedure will be different from that in Exercise 5, however: we no longer have an explicit form for the function that we are fitting – rather, it is the output of an ODE! So what should we do?

We will try to find the parameters β and γ for which the output of the ODEs when we simulate it with those parameters best matches the data!

Exercise 6.1

Below is the result from Homework 4, Exercise 3.2. These are the average S, I, R fractions from running 20 simulations. Click on it!

hw4_T = 1:1000

👉 (Not graded) Manually fit the SIR curves to our data by adjusting β and γ until you find a good fit.


β = 0.05


γ = 0.005


Show manual fit:


👉 To do this automatically, we will again need to define a loss function $\mathcal{L}(\beta, \gamma)$. This will compare the solution of the SIR equations with parameters $\beta$ and $\gamma$ with your data.

This time, instead of comparing two vectors of numbers, we need to compare two vectors of vectors, the S, I, R values.

function loss_sir(β, γ)
    results = euler_SIR(β, γ,
        [0.99, 0.01, 0.00],
        hw4_T)
    sum(results .- hw4_results) do diff
        sum(diff .^ 2)
    end
end
loss_sir(guess_β, guess_γ)  # ≈ 350.18 for the current slider values

👉 Use this loss function to find the optimal parameters β and γ.

found_β, found_γ = let
    gradient_descent_2d(loss_sir, guess_β, guess_γ; η=1e-8)
end

Got it!

If you made it this far, congratulations – you have just taken your first step into the exciting world of scientific machine learning!


Before you submit

Remember to fill in your name and Kerberos ID at the top of this notebook.

"""
Return the argument, but force it to be shown as SVG.

This is an optimization for Plots.jl GR plots: it makes them less jittery and keeps the page DOM small.
"""
as_svg = as_mime(MIME"image/svg+xml"())

Function library

Just some helper functions used in the notebook.

hint, almost, still_missing, keep_working, correct, not_defined — the feedback boxes used by the live answer checks.